The deck’s second major theme is enforcement. Generated tests only matter if they are committed with the feature, run automatically on pull requests, and stored as versioned artifacts that support both engineering review and GxP evidence review.
The workshop frames this as the core habit that turns AI-generated tests into a reliable delivery practice rather than an optional side task.
Use Copilot Chat or Claude Code to create the unit or integration test alongside the feature implementation.
The source file and the test file should move through review as one change set. That keeps intent and validation bound to the same record.
GitHub Actions detects the new tests and starts the run automatically. The deck’s point is zero manual orchestration for the tester.
The agent can summarize which tests were added, their pass/fail status, and the requirement IDs they cover so reviewers see the result in context.
The sample workflow in the deck is simple on purpose: run on pull requests, install dependencies, run the suite, upload artifacts, and comment back to the PR.
The workflow starts when the PR opens or updates against the protected branches. The test run is part of code review, not a separate calendar event.
The runner checks out the repo, installs dependencies, and executes the automated suite with coverage enabled.
Coverage and test outputs are stored on the run so there is durable evidence of exactly what happened for that specific change.
A machine-generated summary makes the run readable inside the pull request without forcing reviewers to open raw logs first.
```yaml
# Triggers on pull requests targeting main or develop
on:
  pull_request:
    branches: [main, develop]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install dependencies
        run: npm ci

      - name: Run tests with coverage
        run: npm test -- --coverage  # enforces 90% gate in jest.config

      - name: Upload test artifacts
        uses: actions/upload-artifact@v4
        with:
          name: test-report-${{ github.sha }}
          path: coverage/

      - name: Comment results on PR
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '✅ Tests passed · Coverage: 92% · [REQ-HT-012] ✓'
            })
```
The deck’s PR slide makes a narrow but important claim: traceability and evidence should be visible on the change record itself.
Requirement IDs appear in test names, so the relationship between the story and the run output is embedded in the artifact instead of maintained in a separate spreadsheet.
Coverage reports, test logs, and packaged outputs are stored as PR-linked artifacts that can be retrieved after merge.
The workshop insists on structural enforcement. A feature is not “done” if the tests are red, missing, or below the defined threshold.
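The threshold itself can be made structural by putting it in the Jest configuration, so a red or under-covered build fails mechanically rather than by reviewer discipline. The exact numbers below are illustrative: the deck's rollout plan suggests starting at 80%, while the sample workflow's comment references a 90% gate.

```javascript
// jest.config.js (sketch): the coverage gate lives in version control
// alongside the code it governs. `npm test -- --coverage` fails the CI
// job if any global threshold is missed. Values are illustrative.
module.exports = {
  collectCoverage: true,
  coverageDirectory: 'coverage',
  coverageThreshold: {
    global: {
      branches: 90,
      functions: 90,
      lines: 90,
      statements: 90,
    },
  },
};
```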
Add a requirement-tag grep to the CI comment step. A one-liner like `grep -ro '\[REQ-[A-Z0-9-]*\]' src/__tests__/` gives you a live traceability report without a separate tool.
The GxP slide is one of the deck’s strongest sections because it ties the workflow directly to evidence, traceability, coverage, and change control outcomes.
The deck uses this as the hard example for mature enforcement, with an 80% starting threshold suggested in the rollout plan.
The pull request becomes the place where code, tests, results, and requirement coverage are reviewed together.
The deck’s compliance argument depends on eliminating manual mapping work wherever the test metadata can carry that burden directly.
Page takeaway: the PR pipeline is not just a convenience layer. In the workshop model, it is the mechanism that turns AI-generated tests into controlled, reviewable, versioned evidence.